23 research outputs found

    Class-Agnostic Counting

    Nearly all existing counting methods are designed for a specific object class. Our work, however, aims to create a counting model able to count any class of object. To achieve this goal, we formulate counting as a matching problem, enabling us to exploit the image self-similarity property that naturally exists in object counting problems. We make the following three contributions: first, a Generic Matching Network (GMN) architecture that can potentially count any object in a class-agnostic manner; second, by reformulating the counting problem as one of matching objects, we can take advantage of the abundance of video data labeled for tracking, which contains natural repetitions suitable for training a counting model and enables us to train the GMN; third, to customize the GMN to different user requirements, an adapter module is used to specialize the model with minimal effort, i.e. using a few labeled examples and adapting only a small fraction of the trained parameters. This is a form of few-shot learning, which is practical for domains where labels are limited because they require expert knowledge (e.g. microbiology). We demonstrate the flexibility of our method on a diverse set of existing counting benchmarks: specifically cells, cars, and human crowds. The model achieves competitive performance on cell and crowd counting datasets, and surpasses the state-of-the-art on the car dataset using only three training images. When trained on the entire dataset, the proposed method outperforms all previous methods by a large margin.
    Comment: Asian Conference on Computer Vision (ACCV), 2018
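
    The matching formulation lends itself to a simple illustration. The sketch below is not the GMN itself: it counts instances by cross-correlating a single labeled exemplar patch with the image and counting correlation peaks, with an assumed threshold and peak separation that would need tuning.

# Minimal counting-by-matching sketch (illustrative only, not the GMN).
import numpy as np
from skimage.feature import match_template, peak_local_max

def count_by_matching(image: np.ndarray, exemplar: np.ndarray,
                      threshold: float = 0.6, min_distance: int = 5) -> int:
    """Count objects resembling `exemplar` in a 2-D grayscale `image`."""
    # Normalised cross-correlation of the exemplar against every image location.
    response = match_template(image, exemplar, pad_input=True)
    # Treat each sufficiently strong, well-separated peak as one object instance.
    peaks = peak_local_max(response, min_distance=min_distance,
                           threshold_abs=threshold)
    return len(peaks)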

    Evaluation of methods for detection of fluorescence labeled subcellular objects in microscope images

    Background: Several algorithms have been proposed for detecting fluorescently labeled subcellular objects in microscope images. Many of these algorithms have been designed for specific tasks and validated with limited image data. But despite the potential of extensive comparisons between algorithms to provide useful information to guide method selection and thus more accurate results, relatively few such studies have been performed.
    Results: To better understand algorithm performance under different conditions, we have carried out a comparative study including eleven spot detection or segmentation algorithms from various application fields. We used microscope images from well plate experiments with a human osteosarcoma cell line and frames from image stacks of yeast cells in different focal planes. These experimentally derived images permit a comparison of method performance in realistic situations where the number of objects varies within the image set. We also used simulated microscope images in order to compare the methods and validate them against a ground truth reference result. Our study finds major differences in the performance of different algorithms, in terms of both object counts and segmentation accuracies.
    Conclusions: These results suggest that the selection of detection algorithms for image-based screens should be done carefully and take into account different conditions, such as the possibility of acquiring empty images or images with very few spots. Our inclusion of methods that have not been used before in this context broadens the set of available detection methods and compares them against the current state-of-the-art methods for subcellular particle detection.
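
    As a concrete example of the kind of detector compared in such a study (not one of the eleven evaluated algorithms), a basic Laplacian-of-Gaussian spot detector can be sketched as follows; the sigma range and threshold are assumptions to be tuned per dataset.

# Illustrative LoG spot detector for fluorescence images (not from the study).
import numpy as np
from skimage import img_as_float
from skimage.feature import blob_log

def detect_spots(image: np.ndarray, min_sigma: float = 1.0,
                 max_sigma: float = 4.0, threshold: float = 0.02) -> np.ndarray:
    """Return one (row, col, sigma) triple per detected fluorescent spot."""
    blobs = blob_log(img_as_float(image), min_sigma=min_sigma,
                     max_sigma=max_sigma, num_sigma=8, threshold=threshold)
    return blobs  # len(blobs) gives the object count for the image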

    Analysis of Spatial Point Patterns in Nuclear Biology

    There is considerable interest in cell biology in determining whether, and to what extent, the spatial arrangement of nuclear objects affects nuclear function. A common approach to address this issue involves analyzing a collection of images produced using some form of fluorescence microscopy. We assume that these images have been successfully pre-processed and a spatial point pattern representation of the objects of interest within the nuclear boundary is available. Typically in these scenarios the number of objects per nucleus is low, which limits the ability of standard analysis procedures to demonstrate the existence of spatial preference in the pattern. There are broadly two common approaches to looking for structure in these spatial point patterns: either the spatial point pattern for each image is analyzed individually, or a simple normalization is performed and the patterns are aggregated. In this paper we demonstrate, using synthetic spatial point patterns drawn from predefined point processes, how difficult it is to distinguish a pattern from complete spatial randomness using these techniques, and hence how easy it is to miss interesting spatial preferences in the arrangement of nuclear objects. The impact of this problem is also illustrated on data related to the configuration of PML nuclear bodies in mammalian fibroblast cells.
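
    For illustration, a Monte Carlo test of a single pattern against complete spatial randomness (CSR) might look like the sketch below. It uses the mean nearest-neighbour distance as the summary statistic and, for simplicity, assumes points in a unit square rather than a real nuclear boundary; it is not the analysis procedure used in the paper.

# Monte Carlo CSR test for one point pattern (simplified, unit-square domain).
import numpy as np

def mean_nn_distance(points: np.ndarray) -> float:
    """Mean distance from each point to its nearest neighbour."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

def csr_test(points: np.ndarray, n_sim: int = 999, seed=None) -> float:
    """Two-sided Monte Carlo p-value: is the pattern consistent with CSR?"""
    rng = np.random.default_rng(seed)
    observed = mean_nn_distance(points)
    sims = np.array([mean_nn_distance(rng.random(points.shape))
                     for _ in range(n_sim)])
    # Proportion of simulated patterns at least as extreme as the observed one.
    p_low = (np.sum(sims <= observed) + 1) / (n_sim + 1)
    p_high = (np.sum(sims >= observed) + 1) / (n_sim + 1)
    return 2 * min(p_low, p_high)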

    Uncertainty-aware estimation of population abundance using machine learning

    Machine Learning is widely used for mining collections, such as images, sounds, or texts, by classifying their elements into categories. Automatic classification based on supervised learning requires groundtruth datasets for modeling the elements to classify, and for testing the quality of the classification. Because collecting groundtruth is tedious, a method for estimating the potential errors in large datasets based on limited groundtruth is needed. We propose a method that improves classification quality by using limited groundtruth data to extrapolate the potential errors in larger datasets. It significantly improves the counting of elements per class. We further propose visualization designs for understanding and evaluating the classification uncertainty. They support end-users in considering the impact of potential misclassifications when interpreting the classification output. This work was developed to address the needs of ecologists studying fish population abundance using computer vision, but generalizes to a larger range of applications. Our method is applicable to a variety of Machine Learning technologies, and our visualizations further support their transfer to end-users.
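
    One way to realize this idea, sketched below under simplifying assumptions and not necessarily the authors' exact estimator, is to estimate the classifier's confusion behaviour on the small labeled subset and invert it to adjust the raw predicted counts on the full, unlabeled collection.

# Hedged sketch: confusion-matrix-based correction of per-class counts.
import numpy as np

def corrected_counts(gt_true, gt_pred, full_pred, n_classes: int) -> np.ndarray:
    """All inputs are integer class-label arrays; returns adjusted class counts."""
    # Row-normalised confusion matrix: C[i, j] = P(predicted = j | true = i),
    # estimated on the limited groundtruth subset.
    C = np.zeros((n_classes, n_classes))
    for t, p in zip(gt_true, gt_pred):
        C[t, p] += 1
    C /= np.maximum(C.sum(axis=1, keepdims=True), 1)
    # Raw predicted counts on the large unlabeled collection.
    raw = np.bincount(full_pred, minlength=n_classes).astype(float)
    # Expected raw counts are C.T @ true_counts; invert by least squares and
    # clip so the corrected estimate stays non-negative.
    est, *_ = np.linalg.lstsq(C.T, raw, rcond=None)
    return np.clip(est, 0, None)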

    Principles of Bioimage Informatics: Focus on Machine Learning of Cell Patterns

    The field of bioimage informatics concerns the development and use of methods for computational analysis of biological images. Traditionally, analysis of such images has been done manually. Manual annotation is, however, slow, expensive, and often highly variable from one expert to another. Furthermore, with modern automated microscopes, hundreds to thousands of images can be collected per hour, making manual analysis infeasible. This field borrows from the pattern recognition and computer vision literature (which contains many techniques for image processing and recognition), but has its own unique challenges and tradeoffs. Fluorescence microscopy images represent perhaps the largest class of biological images for which automation is needed. For this modality, typical problems include cell segmentation, classification of phenotypical response, or decisions regarding differentiated responses (treatment vs. control setting). This overview focuses on the problem of subcellular location determination as a running example, but the techniques discussed are often applicable to other problems.
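
    As a minimal illustration of the classification workflow described above (with deliberately simple stand-in features rather than the richer texture and morphology features used in practice), one could map each fluorescence image to a small feature vector and fit a standard classifier.

# Toy subcellular-location classification pipeline (illustrative features only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def simple_features(image: np.ndarray) -> np.ndarray:
    """A few intensity and edge statistics summarising one grayscale image."""
    img = image.astype(float)
    gy, gx = np.gradient(img)
    return np.array([
        img.mean(), img.std(), np.percentile(img, 95),
        (img > img.mean() + 2 * img.std()).mean(),  # fraction of bright pixels
        np.hypot(gx, gy).mean(),                    # average edge strength
    ])

def train_location_classifier(images, labels):
    """Fit a classifier mapping per-image features to location labels."""
    X = np.stack([simple_features(im) for im in images])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return clf.fit(X, labels)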

    Shading correction for whole slide image using low rank and sparse decomposition.

    Many microscopic imaging modalities suffer from intensity inhomogeneity due to uneven illumination or camera nonlinearity, known as shading artifacts. A typical example of this is the unwanted seam that appears when stitching images to obtain a whole slide image (WSI). Eliminating shading plays an essential role in subsequent image processing such as segmentation, registration, or tracking. In this paper, we propose two new retrospective shading correction algorithms targeted at two common forms of WSI: multiple image tiles before mosaicking, and an already-stitched image. Both methods build on recent advances in matrix rank minimization and sparse signal recovery. We show how the classic shading problem in microscopy can be reformulated as a decomposition into low-rank and sparse components, which seeks an optimal separation of the foreground objects of interest and the background illumination field. Additionally, a sparse constraint is introduced in the Fourier domain to ensure the smoothness of the recovered background. Extensive qualitative and quantitative validation on both synthetic and real microscopy images demonstrates superior performance of the proposed methods in shading removal in comparison with a well-established method in ImageJ.
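
    The core decomposition can be sketched with a generic robust PCA solver (inexact augmented Lagrangian with singular-value thresholding); this omits the paper's additional Fourier-domain sparsity constraint on the background and is only an illustration of the low-rank plus sparse idea. In the tile-based setting, each column of D could hold one flattened image tile, so the low-rank part captures the shared illumination field while the sparse part holds the foreground objects.

# Generic robust PCA sketch: D is decomposed into low-rank L + sparse S.
import numpy as np

def shrink(X: np.ndarray, tau: float) -> np.ndarray:
    """Soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X: np.ndarray, tau: float) -> np.ndarray:
    """Singular-value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(D: np.ndarray, lam: float = None, tol: float = 1e-7, max_iter: int = 500):
    """Inexact-ALM robust PCA: min ||L||_* + lam*||S||_1  s.t.  D = L + S."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D, 'fro')
    mu = 1.25 / np.linalg.norm(D, 2)          # 2-norm = largest singular value
    Y = np.zeros_like(D)                       # Lagrange multipliers
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        L = svd_threshold(D - S + Y / mu, 1.0 / mu)
        S = shrink(D - L + Y / mu, lam / mu)
        residual = D - L - S
        Y += mu * residual
        mu = min(mu * 1.5, 1e7)
        if np.linalg.norm(residual, 'fro') / norm_D < tol:
            break
    return L, S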